A Computational Theory of Subjective Probability
In this article we demonstrate how algorithmic probability theory is applied
to situations that involve uncertainty. When people are unsure of their model of reality, the outcomes they observe cause them to update their beliefs. We argue that classical probability cannot be applied in such cases,
and that subjective probability must instead be used. In Experiment 1 we show
that, when judging the probability of lottery number sequences, people apply
subjective rather than classical probability. In Experiment 2 we examine the
conjunction fallacy and demonstrate that the materials used by Tversky and
Kahneman (1983) involve model uncertainty. We then provide a formal
mathematical proof that, for every uncertain model, there exists a conjunction
of outcomes which is more subjectively probable than either of its constituents
in isolation.

Comment: Maguire, P., Moser, P., Maguire, R., & Keane, M. T. (2013). "A computational theory of subjective probability." In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 960-965). Austin, TX: Cognitive Science Society.
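The lottery result can be illustrated computationally. The following is a minimal sketch, not the authors' materials or method: it uses zlib's compressed length as a computable stand-in for algorithmic complexity (which is itself uncomputable), applied to coin-flip strings of our own construction.

```python
import random
import zlib

def complexity_proxy(s: str) -> int:
    """Compressed length in bytes: a crude computable upper bound on
    the algorithmic complexity of the string."""
    return len(zlib.compress(s.encode(), 9))

random.seed(1)
patterned = "HT" * 32                                        # 64 flips, trivially describable
irregular = "".join(random.choice("HT") for _ in range(64))  # 64 flips, no short description

# Classically, every specific sequence of 64 fair flips has probability 2**-64.
# Algorithmically, the patterned sequence is far more compressible, so it
# supports a simple rival model of the process ("the coin alternates") and is
# judged less probable as the output of a genuinely fair coin.
print(complexity_proxy(patterned), complexity_proxy(irregular))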
Is Consciousness Computable? Quantifying Integrated Information Using Algorithmic Information Theory
In this article we review Tononi's (2008) theory of consciousness as
integrated information. We argue that previous formalizations of integrated
information (e.g. Griffith, 2014) depend on information loss. Since lossy
integration would necessitate continuous damage to existing memories, we
propose it is more natural to frame consciousness as a lossless integrative
process and provide a formalization of this idea using algorithmic information
theory. We prove that complete lossless integration requires noncomputable
functions. This result implies that if unitary consciousness exists, it cannot
be modelled computationally.

Comment: Maguire, P., Moser, P., Maguire, R., & Griffith, V. (2014). Is consciousness computable? Quantifying integrated information using algorithmic information theory. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
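To make the idea of quantifying integration concrete, here is a rough sketch in the spirit of the paper's algorithmic framing. The quantity and inputs are our own illustration, not the paper's formalization: zlib's compressed length C stands in for Kolmogorov complexity, and C(x) + C(y) - C(xy) approximates how much structure the parts share. Any such computable proxy is necessarily imperfect, which is exactly the gap the paper's noncomputability result identifies.

```python
import zlib

def C(data: bytes) -> int:
    """Compressed length: a computable upper bound on Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def integration_proxy(x: bytes, y: bytes) -> int:
    """Heuristic algorithmic mutual information, C(x) + C(y) - C(xy):
    roughly, how much each part can be described in terms of the other."""
    return C(x) + C(y) - C(x + y)

text = b"the quick brown fox jumps over the lazy dog " * 8
unrelated = bytes(range(256)) + bytes(reversed(range(256)))

print(integration_proxy(text, text))       # large: the parts are fully redundant
print(integration_proxy(text, unrelated))  # near zero: nothing shared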
On the Measurability of Measurement Standards
Pollock (2004) argues in favour of Wittgenstein’s (1953) claim that the standard metre bar in Paris has no metric length: because the standard retains a special status in the system of measurement, it cannot be applied to itself. However, we argue that Pollock is mistaken regarding the feature of the standard metre which supports its special status. While the unit markings were arbitrarily designated, the constitution, preservation and application of the bar have been scientifically developed to optimize stability, and hence predictive accuracy. We argue that it is the ‘hard to improve’ quality of stability that supports the standard’s value in measurement, not any of its arbitrary features. And because the special status of the prototype is tied to its ability to meet this external criterion, the possibility always exists of identifying an alternative, more stable, standard, thereby allowing the original standard to be measured.
Investigating the difference between surprise and probability judgments
Surprise is often defined in terms of disconfirmed expectations, whereby the surprisingness of an event is thought to be dependent on the degree to which that event contrasts with a more likely, or expected, outcome. We propose that surprise is more accurately modelled as a manifestation of an ongoing sense-making process. Specifically, the level of surprise experienced depends on the extent to which an event necessitates representational updating. This sense-making view predicts that differences in subjective probability and surprise arise because of differences in representational specificity rather than differences between an expectation and an outcome. We describe two experiments which support this hypothesis. The results of Experiment 1 demonstrate that generalised representations can allow subjectively low probability outcomes to be integrated without eliciting high levels of surprise, thus providing an explanation for the difference between the two measures. The results of Experiment 2 reveal that the level of contrast between expectation and outcome is not correlated with the difference between probability and surprise. The implications for models of surprise are discussed.
Consciousness is Data Compression
In this article we advance the conjecture that conscious
awareness is equivalent to data compression. Algorithmic
information theory supports the assertion that all forms of
understanding are contingent on compression (Chaitin, 2007).
Here, we argue that the experience people refer to as
consciousness is the particular form of understanding that the
brain provides. We therefore propose that the degree of
consciousness of a system can be measured in terms of the
amount of data compression it carries out.
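On this proposal, the measure itself is easy to state. A toy sketch, with zlib playing the role of the compressing system and inputs of our own invention:

```python
import random
import zlib

def compression_achieved(data: bytes) -> int:
    """Bytes saved by compression: raw length minus compressed length.
    On the article's conjecture, this is the kind of quantity that would
    index the degree of consciousness of a compressing system."""
    return max(0, len(data) - len(zlib.compress(data, 9)))

structured = b"ABAB" * 256                                  # 1 KiB of pure regularity
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(1024))   # 1 KiB with no structure

print(compression_achieved(structured))  # large: much regularity extracted
print(compression_achieved(noise))       # ~0: nothing here to "understand"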
In search of the frog’s tail: investigating the time course of conceptual knowledge activation
Slot-filling theories of conceptual combination assume that
both constituent concepts are activated before they are
combined. However, these theories have difficulty in
explaining why combined phrase features are sometimes more
available than the features of the constituent nouns. In this
study, we investigate the time course of conceptual
knowledge activation. Using three verification tasks of
varying complexity, we demonstrate that basic taxonomic knowledge is retrieved more quickly than modal-specific conceptual features. Applying this finding to conceptual
combination, we demonstrate that participants take longer to
reject combinations requiring the activation of instance-specific features (e.g. frog tail) than those that can be rejected
based on more generalized taxonomic knowledge (e.g.
daffodil tail). These findings provide convergent evidence that
conceptual knowledge is activated dynamically and
selectively rather than all at once. We discuss the implications
for existing theories.
Combining Independent Smart Beta Strategies for Portfolio Optimization
Smart beta, also known as strategic beta or factor investing, is the idea of selecting an investment portfolio in a simple rule-based manner that systematically captures market inefficiencies, thereby enhancing risk-adjusted returns above capitalization-weighted benchmarks. We explore the idea of applying a smart beta strategy in reverse, yielding a "bad beta" portfolio which can be shorted, thus allowing long and short positions on independent smart beta strategies to generate beta-neutral returns. In this article we detail the construction of a monthly reweighted portfolio involving two independent smart beta strategies: the first component is a long-short beta-neutral strategy derived from running an adaptive boosting classifier on a suite of momentum indicators; the second component is a minimized-volatility portfolio which exploits the observation that low-volatility stocks tend to yield higher risk-adjusted returns than high-volatility stocks. Relative to a market benchmark Sharpe ratio of 0.42, we find that the market-neutral component achieves a ratio of 0.61 and the low-volatility approach a ratio of 0.90, while the combined leveraged strategy achieves a ratio of 0.96. In six months of live trading, the combined strategy achieved a Sharpe ratio of 1.35. These results reinforce the effectiveness of smart beta strategies, and demonstrate that combining multiple strategies simultaneously can yield better performance than that achieved by any single component in isolation.
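The Sharpe arithmetic behind these comparisons is simple to reproduce in form. Below is an illustrative sketch for two independent strategies and their equal-weight combination; the return series, weights, and parameters are assumptions for illustration, not the authors' data, universe, or classifier.

```python
import numpy as np

def sharpe_ratio(returns: np.ndarray, periods_per_year: int = 12) -> float:
    """Annualised Sharpe ratio of a series of periodic excess returns."""
    return float(np.mean(returns) / np.std(returns, ddof=1)
                 * np.sqrt(periods_per_year))

rng = np.random.default_rng(42)

# Hypothetical monthly excess returns for the two components: a beta-neutral
# momentum book and a minimum-volatility long book (10 years each).
momentum = rng.normal(0.004, 0.020, 120)
low_vol = rng.normal(0.005, 0.018, 120)

# Equal-weight monthly rebalance. Because the streams are independent, their
# volatilities partially cancel while the mean return is preserved, so the
# combination's risk-adjusted return improves on the average component.
combined = 0.5 * momentum + 0.5 * low_vol

for name, r in (("momentum", momentum), ("low vol", low_vol), ("combined", combined)):
    print(f"{name:9s} Sharpe: {sharpe_ratio(r):.2f}")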